MemEIC: A Step Toward Continual and Compositional Knowledge Editing

Seong, Jin, Park, Jiyun, Liermann, Wencke, Choi, Hongseok, Nam, Yoonji, Kim, Hyun, Lim, Soojong, Lee, Namhoon

arXiv.org Artificial Intelligence

The dynamic nature of information necessitates continuously updating large vision-language models (LVLMs). While recent knowledge editing techniques hint at promising directions, they often focus on editing a single modality (vision or language) in isolation. This prevalent practice neglects the inherent multimodality of LVLMs and the continuous nature of knowledge updates, potentially leading to suboptimal editing outcomes when considering the interplay between modalities and the need for ongoing knowledge refinement. To address these limitations, we propose MemEIC, a novel method for Continual and Compositional Knowledge Editing (CCKE) in LVLMs. MemEIC enables compositional editing of both visual and textual knowledge sequentially. Our approach employs a hybrid external-internal editor featuring a dual external memory for cross-modal evidence retrieval and dual LoRA adapters that facilitate disentangled parameter updates for each modality. A key component is a brain-inspired knowledge connector, activated selectively for compositional reasoning, that integrates information across different modalities. Experiments demonstrate that MemEIC significantly improves performance on complex multimodal questions and effectively preserves prior edits, setting a new benchmark for CCKE in LVLMs.
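The architecture the abstract describes (disentangled dual LoRA adapters plus a selectively activated connector) can be caricatured in a few lines. The sketch below is a minimal illustration, not the paper's implementation: the shapes, the merge-by-addition of low-rank deltas, and the gate that activates both adapters only for compositional queries are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
D, R = 8, 2  # hidden size and LoRA rank (toy values)

# Frozen base projection weight of the LVLM.
W_base = rng.standard_normal((D, D))

# Disentangled low-rank updates, one per modality (LoRA: delta = B @ A).
lora = {
    "visual": (rng.standard_normal((D, R)) * 0.1, rng.standard_normal((R, D)) * 0.1),
    "textual": (rng.standard_normal((D, R)) * 0.1, rng.standard_normal((R, D)) * 0.1),
}

def forward(x, modalities, compositional=False):
    """Apply the base weight plus the LoRA delta(s) for the edited modalities.

    The 'connector' here is reduced to a gate: both modality adapters are
    merged only when the query needs compositional (cross-modal) reasoning;
    otherwise only the first (query-relevant) adapter is applied.
    """
    W = W_base.copy()
    active = modalities if compositional else modalities[:1]
    for m in active:
        B, A = lora[m]
        W += B @ A
    return x @ W

x = np.ones(D)
uni = forward(x, ["visual"])                    # single-modality edit path
comp = forward(x, ["visual", "textual"], True)  # connector active
```

The point of the toy is only the routing: each modality's edit lives in its own low-rank delta, and cross-modal integration is a separate, selectively triggered step.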


VLKEB: A Large Vision-Language Model Knowledge Editing Benchmark

Huang, Han, Zhong, Haitian, Yu, Tao, Liu, Qiang, Wu, Shu, Wang, Liang, Tan, Tieniu

arXiv.org Artificial Intelligence

Recently, knowledge editing on large language models (LLMs) has received considerable attention. By comparison, editing Large Vision-Language Models (LVLMs) faces extra challenges from diverse data modalities and complicated model components, and data for LVLM editing are limited. The existing LVLM editing benchmark, which comprises three metrics (Reliability, Locality, and Generality), falls short in the quality of its synthesized evaluation images and cannot assess whether models apply edited knowledge in related content. Therefore, we employ more reliable data collection methods to construct a new Large $\textbf{V}$ision-$\textbf{L}$anguage Model $\textbf{K}$nowledge $\textbf{E}$diting $\textbf{B}$enchmark, $\textbf{VLKEB}$, and extend the Portability metric for more comprehensive evaluation. Leveraging a multi-modal knowledge graph, our image data are bound with knowledge entities. This can be further used to extract entity-related knowledge, which forms the basis of the editing data. We conduct experiments with different editing methods on five LVLMs and thoroughly analyze how they impact the models. The results reveal the strengths and deficiencies of these methods and, we hope, provide insights for future research. The code and dataset are available at: $\href{https://github.com/VLKEB/VLKEB}{\text{https://github.com/VLKEB/VLKEB}}$.
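As a rough illustration of how the four metrics (Reliability, Generality, Locality, Portability) might be scored, the sketch below computes per-metric exact-match accuracy over a list of edit cases. The case schema and the exact-match criterion are assumptions for illustration only, not VLKEB's actual evaluation code.

```python
def em(pred, gold):
    """Exact-match after normalization (an assumed scoring rule)."""
    return pred.strip().lower() == gold.strip().lower()

def evaluate(model, cases):
    """Score an edited model on the four benchmark metrics.

    Each case maps a metric name to (prompt, expected-answer) probes:
      reliability - the edit prompt itself
      generality  - rephrasings of the same edit
      locality    - unrelated prompts whose answers must NOT change
      portability - prompts requiring reasoning over the edited fact
    Returns mean accuracy per metric.
    """
    scores = {k: [] for k in ("reliability", "generality", "locality", "portability")}
    for case in cases:
        for metric, probes in case.items():
            for prompt, gold in probes:
                scores[metric].append(em(model(prompt), gold))
    return {k: sum(v) / len(v) for k, v in scores.items() if v}
```

Here `model` is any callable from prompt to answer string; in practice each probe for an LVLM would also carry an image, which this text-only toy omits.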


How artificial intelligence is going to cure America's sick health care system

#artificialintelligence

For decades, technology has relentlessly made phones, laptops, apps and entire industries cheaper and better--while health care has stubbornly loitered in an alternate universe where tech makes everything more expensive and more complex. Now startups are applying artificial intelligence (AI), floods of data and automation in ways that promise to dramatically drive down the costs of health care while increasing effectiveness. If this profound trend plays out, within five to 10 years, Congress won't have to fight about the exploding costs of Medicaid and insurance. Instead, it might battle over what to do with a massive windfall. Today's debate over the repeal of Obamacare would come to seem as backward as a discussion about the merits of leeching. One proof point is in the maelstrom of activity around diabetes, the most expensive disease in the world.